1.
Medical Imaging 2022: Computer-Aided Diagnosis ; 12033, 2022.
Article in English | Scopus | ID: covidwho-1923076

ABSTRACT

Automated analysis of chest imaging in coronavirus disease (COVID-19) has mostly been performed on smaller datasets, leading to overfitting and poor generalizability. Training deep neural networks on large datasets requires data labels, which are not always available and can be expensive to obtain. Self-supervision is increasingly used in medical imaging tasks to leverage large amounts of unlabeled data during pretraining. Our proposed approach pretrains a vision transformer to perform two self-supervision tasks, image reconstruction and contrastive learning, on a chest X-ray (CXR) dataset. In the process, we generate more robust image embeddings. The reconstruction module models visual semantics within the lung fields by reconstructing the input image through a mechanism that mimics denoising and autoencoding. The contrastive learning module, in turn, learns the concept of similarity between two texture representations. After pretraining, the vision transformer is used as a feature extractor for a clinical outcome prediction task on our target dataset. The pretraining multi-Kaggle dataset comprises 27,499 CXR scans, while our target dataset contains 530 images. Specifically, our framework predicts ventilation and mortality outcomes for COVID-19 positive patients using baseline CXR. We compare our method against a baseline approach using pretrained ResNet50 features. Experimental results demonstrate that our proposed approach outperforms the supervised method. © 2022 SPIE.
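The abstract does not include implementation detail, so the following is a minimal NumPy sketch of how the two self-supervision objectives could be combined into one pretraining loss. The function names, the NT-Xent-style formulation of the contrastive term, and the `alpha` weighting are all assumptions, not the authors' published code.

```python
import numpy as np

def reconstruction_loss(original, reconstructed):
    """Mean-squared error between input images and their reconstructions
    (stands in for the denoising/autoencoding reconstruction objective)."""
    return float(np.mean((original - reconstructed) ** 2))

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss over two batches of embeddings, where z1[i] and
    z2[i] are two views (e.g. texture representations) of the same image."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / temperature        # pairwise cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)  # stabilize the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # the diagonal holds the positive (matching-view) pairs
    return float(-np.mean(np.diag(log_prob)))

def pretraining_loss(original, reconstructed, z1, z2, alpha=1.0):
    """Weighted sum of the two self-supervision objectives; alpha is an
    assumed hyperparameter, not taken from the paper."""
    return reconstruction_loss(original, reconstructed) + alpha * contrastive_loss(z1, z2)
```

In practice both terms would be computed on vision-transformer outputs and backpropagated jointly; this sketch only shows the shape of the combined objective.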

2.
24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021 ; 12905 LNCS:824-833, 2021.
Article in English | Scopus | ID: covidwho-1469657

ABSTRACT

COVID-19 image analysis has mostly focused on diagnostic tasks using single-timepoint scans acquired upon disease presentation or admission. We present a deep learning-based approach to predict lung infiltrate progression from serial chest radiographs (CXRs) of COVID-19 patients. Our method first utilizes convolutional neural networks (CNNs) for feature extraction from patches within the concerned lung zone, and also from neighboring and remote boundary regions. The framework further incorporates a multi-scale Gated Recurrent Unit (GRU) with a correlation module for effective predictions. The GRU accepts CNN feature vectors from the three different areas as input and generates a fused representation. The correlation module attempts to minimize the correlation loss between hidden representations of the concerned and neighboring area feature vectors, while maximizing it between those of the concerned and remote regions. Further, we employ an attention module over the output hidden states of each encoder timepoint to generate a context vector. This vector is used as input to a decoder module to predict patch severity grades at a future timepoint. Finally, we ensemble the patch classification scores to calculate patient-wise grades. Specifically, our framework predicts zone-wise disease severity for a patient on a given day by learning representations from the previous temporal CXRs. Our novel multi-institutional dataset comprises sequential CXR scans from N = 93 patients. Our approach outperforms transfer learning and radiomic feature-based baseline approaches on this dataset. © 2021, Springer Nature Switzerland AG.
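The correlation module can be read as pulling the concerned zone's hidden representation toward its neighbors and pushing it away from remote regions. Below is a minimal NumPy sketch of one plausible form of that objective; the Pearson-correlation formulation and the function names are assumptions, not the paper's exact loss.

```python
import numpy as np

def batch_corr(a, b, eps=1e-8):
    """Mean Pearson correlation between paired rows of two batches of
    hidden representations (each row is one patch's hidden vector)."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    return float(np.mean(num / den))

def correlation_loss(h_concerned, h_neighbor, h_remote):
    """Low when the concerned zone correlates with its neighbors and is
    decorrelated from remote boundary regions - a sketch of the intuition
    described in the abstract, not the published implementation."""
    return (1.0 - batch_corr(h_concerned, h_neighbor)) + batch_corr(h_concerned, h_remote)
```

In the described framework this term would be added to the severity-prediction loss and minimized jointly, shaping the GRU's fused representation across the three areas.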
